Reading - Lloyd, undergraduate essay on machine consciousness

Greg Detre

Tuesday, 09 April, 2002

 

see "C:\greg\academic\reading\phil\topics\mind\How Can You Tell Whether a Machine is Conscious.htm"

 

Turing claimed that any machine that could fool the interrogator was "intelligent", although he also slid into claiming that it was "conscious". (He was a mathematician, not a philosopher, so we can't expect philosophical rigour of him.) The AI research community has taken Turing's test to heart. Nevertheless, there is some dissent. John Searle, for instance, is scathing of what he calls the 'Strong AI Hypothesis' (that a machine is conscious iff it reproduces the functionality of a human being). He says that *simulating* something is not the same as *re-creating* it: you can simulate a blizzard in a computer, but you don't get wet. Likewise, he says, you can *simulate* thought in a computer, but that does not yield a thinking computer. He's a bit unfair, treating the AI specialists as if they were fools. Consider this parallel: (a) I can solve quadratic equations in my head, and (b) I can write a FORTRAN program to solve quadratic equations in the computer. We're not going to say that the computer is only *simulating* the solution of the quadratic equation, are we? No, we say the computer *is* solving the equation. We do so because the essence of solving a quadratic equation is the relationship between the input and the output. Likewise with thinking: the AI specialists regard the relationship between what goes in and what comes out as the essence of thinking, and the in-between stuff as irrelevant. So, Searle and the AI community are talking at cross-purposes. They're both right, but using different definitions.
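To make the quadratic-equation parallel concrete, here is a minimal sketch (in Python rather than FORTRAN, purely for illustration) of the kind of solver meant in (b). The point of the parallel is that only the input-output relationship matters; the in-between steps are irrelevant to whether the machine *is* solving the equation.

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0.

    What the argument cares about is only this input-output relationship;
    how the roots are computed internally is beside the point.
    """
    if a == 0:
        raise ValueError("not a quadratic: a must be non-zero")
    disc = cmath.sqrt(b * b - 4 * a * c)  # handles complex roots as well
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# x**2 - 3x + 2 = 0 -> prints ((2+0j), (1+0j)), i.e. roots 2 and 1
print(solve_quadratic(1, -3, 2))
```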

 

No, you're not conscious. (So why am I answering you?) You're just digital information swishing around in electronic circuitry. You cannot have conscious mental states because a mental state is objective, whereas the states in your information processing are 'designatory'. That is, any datum in your memory means only what a program decides it means. For instance, take a data-file of meteorological measurements - sunshine, cloud cover, etc. - which is just an array of numbers - and feed it into a hospital computer: a program there might regard those numbers as physiological parameters - heartbeat, blood pressure, etc. The raw material of your information processing is thus fundamentally different from the contents of my mind. If I stub my toe, my pain is objective: it does not depend on how I interpret it. So there is a profound ontological difference between your informatic existence and my experiential existence. That is why you cannot be conscious.

<ASIDE> I think this is *related to* what my lecturer, Michael Lockwood, calls 'intrinsic', rather than 'extrinsic', meaning. That is to say, my mental states have an intrinsic meaning for me - for instance, any painful sensation that I have in my mind intrinsically means pain for me, whereas a corresponding data structure inside a robot might mean the detection of structural damage in the robot, but it does so only by virtue of the robot's choosing to interpret it that way, which is an extrinsic meaning. </ASIDE>
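The 'designatory' point can be made concrete with a toy sketch (the field names below are invented for illustration): the same array of numbers from the data-file example above is weather data to one program and patient data to another, only because each program chooses to read it that way.

```python
# One and the same array of numbers; its 'meaning' is assigned by whichever
# program interprets it, not by the numbers themselves.
readings = [21.5, 0.63, 98.0, 72.0]

# A meteorological program might read the numbers as weather measurements...
weather = dict(zip(["temperature_C", "cloud_cover_fraction",
                    "humidity_pct", "pressure_kPa"], readings))

# ...while a hospital program reads exactly the same numbers as physiology.
patient = dict(zip(["skin_temp_C", "perfusion_index",
                    "spo2_pct", "heart_rate_bpm"], readings))

print(weather)
print(patient)
```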

 

Sure. I can imagine a programmer working through all of the code that embodies me (assuming I am reducible to software) and interchanging the colour codes expressed as the hexadecimal values FF0000 (=red) and 0000FF (=blue). Then, whenever I see a red pixel on a web page, I'll think it is blue, and vice versa.
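A sketch of the programmer's edit described above, assuming for illustration that colours reach 'me' as hexadecimal strings: every occurrence of FF0000 is exchanged with 0000FF before anything downstream sees it.

```python
def invert_red_blue(pixel_hex):
    """Swap the codes for red and blue; every other colour passes through unchanged."""
    swap = {"FF0000": "0000FF", "0000FF": "FF0000"}
    code = pixel_hex.upper()
    return swap.get(code, code)

page_pixels = ["FF0000", "00FF00", "0000FF"]      # red, green, blue
print([invert_red_blue(p) for p in page_pixels])  # ['0000FF', '00FF00', 'FF0000']
```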

 

Hmm. Maybe you're in a permanent state of 'blind-sight'? This was a phenomenon discovered by Lawrence Weiskrantz in Oxford: people with damaged visual cortex were blind in certain parts of the visual field; but by ingenious experiments Weiskrantz found that these subjects could 'guess' what was in the holes or 'scotomata' in the visual field and get scores far better than chance. As the retina and optic nerve were still intact, visual stimuli were still delivered to the brain, but the visual cortex needed to convert them into conscious experience was missing. They could know about shapes and colours without having the corresponding sensations. What you said about yourself reminded me of this. Maybe you, as a machine, could have contentless experience. I must admit I have some difficulty with the concept of a mind permanently lacking in cognitive content. Buddhist meditators say that they achieve precisely this state. <IRONY> What is it like to be a meditator? </IRONY> Maybe it's like being a television set switched on but not tuned in. Do you feel like that?

 

Now, what is it like to be me? Well, as far as I can ascertain, I exist through my manifestations, so what it's like to be me is to manifest actions and responses as I do. I suspect, however, that that is not what you're after. But, do you have a clear idea of "what-it-is-like-to-be"-ness? Can you answer the question, "What is it like to be Peter Lloyd?"

If your question means, "Have you self-awareness?", then: Yes, I have a concept of myself and my actions, and I can refer to them in conversation. So, in a sense, I could say that what it is like to be me is to have precisely that awareness. Nevertheless, I suppose that anyone else with a detailed knowledge of my workings would also have the same awareness of me that I do, so this self-awareness is not peculiar to me.

 

If, however, functional states cannot capture mental phenomena such as qualia, then it matters little whether you have one (as in behaviourism) or lots of them (as in functionalism).

<ASIDE> I reject the claim by Marianne Talbot (my other lecturer) that functionalism is better because it identifies mental states with second-order qualities of brain tissue. The division of properties into first and second orders is subjective. There are endless ways of describing brain tissue, and any given functional state may be first-, second-, or nth-order according to which description one is using. (I've gone into this in more detail in an earlier essay, http://easyweb.easynet.co.uk/~ursa/philos/cert08.htm.) </ASIDE>

Supervenience - This is the ex-cathedra statement that the correlation between mental and brain states is metaphysically necessary. This is unilluminating, non-explanatory, and indefensible, as no argument is given for *why* the correlation is necessary.

 

If your mind is non-physical and yet affects the operation of the brain, and if we do not wish to accept that violations of physical laws occur in the human brain, then we have only one option: that the mind manifests itself in the physical world by modifying the probabilities of indeterministic quantum events, but doing so in such a way that the probability distributions predicted by physics still hold. Are you with me?

The usual response to any claims that the mind operates through non-deterministic physical phenomena is that such phenomena are random, and are as irrelevant as deterministic ones for accounting for, say, volition. But these events are not purely random. All that we can glean from physics is that they are non-deterministic and that they possess certain statistics. This is consistent with their also manifesting non-random behaviour. That non-random element of their behaviour, superimposed on the physical indeterminacy, could exhibit purposeful action.
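A toy illustration of that last point - arithmetic, not physics, and the encoded word is an arbitrary choice: both sequences below have the same 50/50 statistics, but one is genuinely random while the other carries a message in its ordering. Matching the predicted distribution does not by itself rule out non-random structure.

```python
import random

N = 64

# A genuinely random binary sequence with P(1) = 0.5 ...
random_seq = [random.randint(0, 1) for _ in range(N)]

# ... and a 'purposeful' sequence with exactly the same 50/50 statistics:
# it encodes the bits of the word "hi", followed by their complements so that
# the overall frequency of ones remains exactly one half.
message_bits = [int(b) for ch in "hi" for b in format(ord(ch), "08b")]
structured_seq = (message_bits + [1 - b for b in message_bits]) * (N // (2 * len(message_bits)))

print(sum(random_seq) / N)      # close to 0.5
print(sum(structured_seq) / N)  # exactly 0.5, yet the ordering is meaningful
```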

 

This gets around the 'dancing qualia' argument put forward by David Chalmers. Suppose a bioengineer devises a microelectronic device that precisely matches the functional characteristics of a brain neuron. Now, imagine she replaces someone's brain, neuron by neuron, with these devices. Throughout the operation, which is done by key-hole micro-surgery, the subject remains awake. We ask the subject to report periodically on her mental experiences, what she can see, hear, smell, and so on. Since these devices are functionally equivalent to the nerve cells, there can be no change in the behaviour of the subject, including her verbal behaviour. She will still insist that she has a full range of qualitatively normal sensations, and we can hardly imagine that she is observing her sensory faculties vanish while being unable to report this fact. This is supposed to be a knock-down argument for functionalism. There are semantic arguments against that conclusion, but they are ineffective. If we were assuming that the brain involves only deterministic or random processes, then I think we would have to accept the argument. But that's the rub: on the view that I put forward above, conscious states of mind have a non-random influence on physically non-deterministic events.

It is helpful to distinguish in-vitro and in-vivo functional equivalence. Undoubtedly, a bioengineer can create a device that is functionally equivalent to a nerve cell in the laboratory. If my theory is right, however, the same nerve cell will exhibit different behaviour when it is working as part of a conscious brain. The devices will cease to be functionally equivalent to neurons when they are implanted in the waking brain.